In computer science, lattice problems are a class of optimization problems on lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems. For applications in such cryptosystems, lattices over vector spaces (often $\mathbb{R}^n$) or free modules (often $\mathbb{Z}^n$) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is the Euclidean norm L2. However, other norms (such as Lp) are also considered and show up in a variety of results.[1] Let $\lambda(L)$ denote the length of the shortest non-zero vector in the lattice L: $\lambda(L) = \min_{v \in L \setminus \{0\}} \|v\|_N$.
In the shortest vector problem (SVP), a basis of a vector space V and a norm N (often L2) are given for a lattice L, and one must find the shortest non-zero vector in V, as measured by N, in L. In other words, the algorithm should output a non-zero vector v such that $\|v\|_N = \lambda(L)$.
In the $\gamma$-approximation version $\mathrm{SVP}_\gamma$, one must find a non-zero lattice vector of length at most $\gamma \cdot \lambda(L)$ for a given $\gamma \ge 1$.
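To make the definition concrete, the following sketch estimates $\lambda(L)$ under the Euclidean norm by brute-force enumeration of integer combinations of the basis vectors with bounded coefficients. The function name and the coefficient bound are illustrative assumptions, not from the source; the search is exponential in the dimension and can miss the true shortest vector if its coefficients exceed the bound.

```python
import itertools
import math

def shortest_vector_bruteforce(basis, coeff_bound=3):
    """Estimate lambda(L) under the Euclidean norm by enumerating all
    combinations x1*b1 + ... + xn*bn with |xi| <= coeff_bound.
    Exponential in the dimension; a toy illustration only."""
    n, m = len(basis), len(basis[0])
    best_vec, best_len = None, math.inf
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # 0 is always in L but does not count
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(m)]
        length = math.sqrt(sum(x * x for x in v))
        if length < best_len:
            best_vec, best_len = v, length
    return best_vec, best_len

# Toy lattice in Z^2 spanned by (2, 0) and (1, 2): lambda(L) = 2.
print(shortest_vector_bruteforce([[2, 0], [1, 2]]))
```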
The exact version of the problem is only known to be NP-hard for randomized reductions.[2] Approach techniques: the Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm produces a "relatively short vector" in polynomial time, but does not solve the problem exactly. Kannan's HKZ basis reduction algorithm solves the problem exactly in time $2^{O(n \log n)}$, where n is the dimension. Lastly, Schnorr presented a technique that interpolates between LLL and HKZ, called block reduction. Block reduction works with HKZ bases of small blocks, and if the block size is chosen to be at least the dimension, the resulting algorithm becomes Kannan's full HKZ basis reduction.
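As an illustration of the flavor of these reduction algorithms, here is a minimal pure-Python sketch of LLL reduction with the standard parameter $\delta = 3/4$, using exact rational arithmetic. It is a didactic toy rather than a production implementation (real implementations use floating-point Gram–Schmidt and many optimizations), and the function names are my own.

```python
from fractions import Fraction

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(b):
    """Gram-Schmidt orthogonalization; returns b* and coefficients mu."""
    n = len(b)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = b[i][:]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Toy LLL reduction over the rationals (delta = 3/4).
    Recomputes Gram-Schmidt from scratch after every change: correct
    but far slower than real implementations."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)
    bstar, mu = gram_schmidt(b)
    k = 1
    while k < n:
        # Size reduction: force |mu[k][j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt(b)
        # Lovasz condition; swap b[k] and b[k-1] if it fails.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]

# Classic example: reduces to a basis of short, nearly orthogonal vectors.
print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```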
The problem $\mathrm{GapSVP}_\beta$ consists of differentiating between the instances of SVP in which the answer is at most 1 or larger than $\beta$, where $\beta$ can be a fixed function of n, the number of vectors. Given a basis for the lattice, the algorithm must decide whether $\lambda(L) \le 1$ or $\lambda(L) > \beta$. Like other promise problems, the algorithm is allowed to err on all other cases.
Yet another version of the problem is $\mathrm{GapSVP}_{\zeta,\gamma}$ for some functions $\zeta$ and $\gamma$. The input to the algorithm is a basis B and a number d. It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that $\lambda(B) \le \zeta(n)$, and that $1 \le d \le \zeta(n)/\gamma(n)$, where n is the dimension. The algorithm must accept if $\lambda(B) \le d$, and reject if $\lambda(B) > \gamma(n) \cdot d$. For large $\zeta$ ($\zeta(n) > 2^{n/2}$), the problem is equivalent to $\mathrm{GapSVP}_\gamma$ because[3] a preprocessing done using the LLL algorithm makes the second condition (and hence, $\zeta$) redundant.
In the closest vector problem (CVP), a basis of a vector space V and a metric M (often L2) are given for a lattice L, as well as a vector v in V but not necessarily in L. It is desired to find the vector in L closest to v (as measured by M). In the $\gamma$-approximation version $\mathrm{CVP}_\gamma$, one must find a lattice vector at distance at most $\gamma \cdot d(v, L)$, where $d(v, L)$ denotes the distance from v to the lattice.
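A first feel for approximate CVP is given by Babai's rounding heuristic: write the target in the coordinates of the basis, round each coordinate to the nearest integer, and map back to the lattice. Its quality depends strongly on how reduced the basis is, and it is only an approximation. The helper names below are my own; exact rational arithmetic is used for simplicity.

```python
from fractions import Fraction

def coords(B, t):
    """Solve sum_i x_i * B[i] = t by Gauss-Jordan elimination over the
    rationals. B is a list of n linearly independent row vectors in R^n."""
    n = len(B)
    A = [[Fraction(B[i][j]) for i in range(n)] + [Fraction(t[j])] for j in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [a / A[col][col] for a in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [a - A[r][col] * p for a, p in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]

def babai_rounding(B, t):
    """Babai's rounding heuristic for approximate CVP: round the basis
    coordinates of t and return the resulting lattice vector."""
    x = [round(c) for c in coords(B, t)]
    return [sum(xi * B[i][j] for i, xi in enumerate(x)) for j in range(len(t))]

# Toy example: the heuristic returns a nearby (not necessarily closest)
# lattice point for the target (3, 3).
print(babai_rounding([[2, 0], [1, 2]], [3, 3]))  # -> [4, 4]; the true closest is [3, 2]
```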
It is a more general case of the shortest vector problem. It is easy[4] to show that given an oracle for $\mathrm{CVP}_\gamma$ (defined above), one can solve $\mathrm{SVP}_\gamma$ by making some queries to the oracle. The naive method to find the shortest vector by calling the $\mathrm{CVP}_\gamma$ oracle to find the closest vector to 0 does not work because 0 is itself a lattice vector and the algorithm could potentially output 0.
The reduction from $\mathrm{SVP}_\gamma$ to $\mathrm{CVP}_\gamma$ is as follows: Suppose that the input to the $\mathrm{SVP}_\gamma$ problem is the basis $B = [b_1, b_2, \ldots, b_n]$ for the lattice L. Consider the basis $B^i = [b_1, \ldots, 2b_i, \ldots, b_n]$ and let $x_i$ be the vector returned by $\mathrm{CVP}_\gamma(B^i, b_i)$. The claim is that the shortest vector in the set $\{x_i - b_i : 1 \le i \le n\}$ is the shortest vector in the given lattice.
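Under the notation above, the reduction can be sketched as follows. `cvp_oracle(basis, target)` is a hypothetical stand-in for any exact CVP solver. The doubling of $b_i$ ensures that every vector of $L(B^i)$ has an even coefficient on $b_i$, so $x_i - b_i$ has an odd (hence non-zero) coefficient on $b_i$ and is a non-zero vector of L.

```python
def svp_via_cvp(basis, cvp_oracle):
    """Solve SVP with n calls to a CVP oracle: for each i, double b_i
    to get the sublattice L(B^i), ask for its closest vector x_i to b_i,
    and keep the shortest difference x_i - b_i (a non-zero vector of L)."""
    n = len(basis)
    candidates = []
    for i in range(n):
        doubled = [row[:] for row in basis]
        doubled[i] = [2 * x for x in doubled[i]]   # B^i: replace b_i by 2*b_i
        x = cvp_oracle(doubled, basis[i])          # closest vector to b_i in L(B^i)
        candidates.append([a - b for a, b in zip(x, basis[i])])
    # The shortest of the n candidates is a shortest non-zero vector of L.
    return min(candidates, key=lambda v: sum(c * c for c in v))
```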
Goldreich et al.[5] showed that any hardness of SVP implies the same hardness for CVP. Using PCP tools, Arora et al.[6] showed that CVP is hard to approximate within factor $2^{\log^{1-\epsilon} n}$ unless $NP \subseteq DTIME(2^{\mathrm{poly}(\log n)})$. Dinur et al.[7] strengthened this by giving an NP-hardness result with $\epsilon = (\log\log n)^{-c}$ for $c < 1/2$.
Algorithms for CVP, especially the Fincke–Pohst variant,[8] have been used, for example, for data detection in multiple-input multiple-output (MIMO) wireless communication systems (for coded and uncoded signals).[9][10]
The Fincke–Pohst algorithm has also been applied to integer ambiguity resolution of carrier-phase GNSS (GPS).[11]
This problem is similar to the GapSVP problem. For $\mathrm{GapCVP}_\beta$, the input consists of a lattice basis and a vector v, and the algorithm must answer whether $d(v, L) \le 1$ or $d(v, L) > \beta$.
The problem is trivially contained in NP for any approximation factor.
Schnorr,[12] in 1987, showed that deterministic polynomial time algorithms can solve the problem for $\gamma = 2^{O(n(\log\log n)^2/\log n)}$. Ajtai et al.[13] showed that probabilistic algorithms can achieve a slightly better approximation factor of $\gamma = 2^{O(n\log\log n/\log n)}$.
In 1993, Banaszczyk[14] showed that $\mathrm{GapCVP}_n$ is in $NP \cap coNP$. In 2000, Goldreich and Goldwasser[15] showed that $\beta = \sqrt{n/\log n}$ puts the problem in both NP and coAM. In 2005, Aharonov and Regev[16] showed that for some constant $c$, the problem with $\beta = c\sqrt{n}$ is in $NP \cap coNP$.
For lower bounds, Dinur et al.[17] showed in 1998 that the problem is NP-hard for $\beta = n^{c/\log\log n}$.
Given a lattice L of dimension n, the algorithm must output n linearly independent vectors $v_1, v_2, \ldots, v_n$ so that $\max_i \|v_i\| \le \max_i \|b_i\|$, where the right-hand side considers all bases $[b_1, \ldots, b_n]$ of the lattice.
In the $\gamma$-approximate version $\mathrm{SIVP}_\gamma$, given a lattice L with dimension n, one must find n linearly independent vectors of length $\max_i \|v_i\| \le \gamma \cdot \lambda_n(L)$, where $\lambda_n(L)$ is the n-th successive minimum of L.
This problem is similar to CVP. Given a vector such that its distance from the lattice is at most $\lambda(L)/2$, the algorithm must output the closest lattice vector to it.
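Babai's nearest-plane algorithm is a standard tool for this kind of decoding: on an LLL-reduced basis it returns the closest lattice vector whenever the target is close enough to the lattice relative to the Gram–Schmidt vectors. A self-contained sketch, with my own function names and exact rational arithmetic:

```python
from fractions import Fraction

def nearest_plane(basis, target):
    """Babai's nearest-plane algorithm: walk down the Gram-Schmidt
    directions, at each level shifting the target into the nearest
    translate of the lower-dimensional sublattice. Solves BDD when the
    target is sufficiently close to the (preferably LLL-reduced) basis."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)
    # Gram-Schmidt orthogonalization of the basis vectors.
    bstar = []
    for i in range(n):
        v = b[i][:]
        for w in bstar:
            coeff = dot(b[i], w) / dot(w, w)
            v = [vi - coeff * wj for vi, wj in zip(v, w)]
        bstar.append(v)
    t = [Fraction(x) for x in target]
    result = [Fraction(0)] * len(target)
    for i in range(n - 1, -1, -1):
        c = round(dot(t, bstar[i]) / dot(bstar[i], bstar[i]))  # nearest plane index
        t = [ti - c * bi for ti, bi in zip(t, b[i])]
        result = [ri + c * bi for ri, bi in zip(result, b[i])]
    return [int(x) for x in result]

# Toy example: the point (4, 5) decodes to the lattice point (4, 4).
print(nearest_plane([[2, 0], [1, 2]], [4, 5]))
```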
Given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice.
Many problems become easier if the input basis consists of short vectors. An algorithm that solves the shortest basis problem (SBP) must, given a lattice basis B, output an equivalent basis B' such that the length of the longest vector in B' is as short as possible.
The approximation version $\mathrm{SBP}_\gamma$ consists of finding a basis whose longest vector is at most $\gamma$ times longer than the longest vector in the shortest basis.
Average-case hardness of problems forms a basis for proofs of security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.
The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The LLL algorithm[18] was an early efficient algorithm for this problem, which could output an almost-reduced lattice basis in polynomial time. This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice. However, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai.[19]
In his seminal papers,[19][20] Ajtai showed that the SVP problem was NP-hard (for randomized reductions) and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. Building on these results, Ajtai and Dwork[21] created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP, thus making it the first[22] result to have used worst-case hardness to create secure systems.